In recent years, arbitrary image style transfer has attracted increasing attention. Given a pair of content and style images, the goal is to synthesize a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features: stylizing an image with rich style patterns can damage the content details, sometimes to the point where objects in the image can no longer be clearly distinguished. For this reason, we present STT, a new transformer-based method for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
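The edge loss is not specified in the abstract; the following is a minimal sketch of one plausible form, assuming Sobel gradient magnitudes of the stylized and content images are compared with an L1 penalty. The function names and the choice of Sobel filtering are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude for a (B, C, H, W) image batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(stylized: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """Penalize differences between the edge maps of the stylized and content images."""
    return F.l1_loss(sobel_edges(stylized), sobel_edges(content))
```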
Benefiting from its single-photon sensitivity, the single-photon avalanche diode (SPAD) array has been widely applied in fields such as fluorescence lifetime imaging and quantum computing. However, large-scale high-fidelity single-photon imaging remains a major challenge, due to the complex hardware manufacturing process and heavy noise of SPAD arrays. In this work, we introduce deep learning into SPAD, enabling super-resolution single-photon imaging over an order of magnitude, with significant enhancement of bit depth and imaging quality. We first studied the complex photon flow model of SPAD electronics to accurately characterize multiple physical noise sources, and collected a real SPAD image dataset (64 $\times$ 32 pixels, 90 scenes, 10 different bit depths, 3 different illumination fluxes, 2790 images in total) to calibrate the noise model parameters. With this real-world physical noise model, we synthesized, for the first time, a large-scale realistic single-photon image dataset (image pairs at 5 different resolutions up to megapixels, 17250 scenes, 10 different bit depths, 3 different illumination fluxes, 2.6 million images in total) for subsequent network training. To tackle the severe super-resolution challenge posed by SPAD inputs with low bit depth, low resolution, and heavy noise, we further built a deep transformer network with a content-adaptive self-attention mechanism and gated fusion modules, which can mine global contextual features to remove multi-source noise and extract full-frequency details. We applied the technique to a series of experiments including macroscopic and microscopic imaging, microfluidic inspection, and Fourier ptychography. The experiments validate the technique's state-of-the-art super-resolution SPAD imaging performance, with more than 5 dB superiority in PSNR over existing methods.
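The abstract does not detail the gated fusion modules; below is a minimal PyTorch sketch of a generic gated fusion of two feature maps, where a learned per-pixel gate blends them. The class name, gate structure, and channel layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse two feature maps with a learned per-pixel gate (illustrative structure)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))  # (B, C, H, W) gate in [0, 1]
        return g * feat_a + (1.0 - g) * feat_b
```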
In this paper, we study a novel and widely existing problem in graph matching (GM), namely, Bi-level Noisy Correspondence (BNC), which refers to node-level noisy correspondence (NNC) and edge-level noisy correspondence (ENC). In brief, on the one hand, due to poor recognizability and viewpoint differences between images, some keypoints are inevitably annotated inaccurately, with offsets and confusion, leading to mismatches between associated nodes, i.e., NNC. On the other hand, the noisy node-to-node correspondence further contaminates the edge-to-edge correspondence, thus leading to ENC. To address the BNC challenge, we propose a novel method termed Contrastive Matching with Momentum Distillation. Specifically, the proposed method employs a robust quadratic contrastive loss which enjoys the following merits: i) better exploring node-to-node and edge-to-edge correlations through a GM-customized quadratic contrastive learning paradigm; ii) adaptively penalizing noisy assignments based on the confidence estimated by the momentum teacher. Extensive experiments on three real-world datasets show the robustness of our model compared with 12 competitive baselines.
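As a rough illustration of confidence-weighted contrastive matching (not the paper's exact quadratic formulation), the sketch below applies an InfoNCE-style loss over a node-to-node similarity matrix and down-weights each pair by a confidence score from a momentum teacher. The function name, the temperature `tau`, and the weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_node_contrastive_loss(sim: torch.Tensor,
                                   teacher_conf: torch.Tensor,
                                   tau: float = 0.1) -> torch.Tensor:
    """Contrastive loss over an N x N node similarity matrix `sim`, where the
    diagonal holds the annotated (possibly noisy) correspondences.
    `teacher_conf` (N,) down-weights pairs the momentum teacher deems unreliable."""
    logits = sim / tau
    targets = torch.arange(sim.size(0), device=sim.device)
    per_pair = F.cross_entropy(logits, targets, reduction="none")
    return (teacher_conf * per_pair).mean()
```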
Perception algorithms in autonomous driving systems confront great challenges in long-tail traffic scenarios, where Safety of the Intended Functionality (SOTIF) problems can be triggered by algorithm performance insufficiencies and the dynamic operational environment. However, such scenarios are not systematically included in current open-source datasets, and this paper fills the gap accordingly. Based on the analysis and enumeration of trigger conditions, a high-quality, diverse dataset is released, including various long-tail traffic scenarios collected from multiple resources. Considering the development of probabilistic object detection (POD), this dataset marks the trigger sources that may cause perception SOTIF problems in the scenarios as key objects. In addition, an evaluation protocol is suggested to verify the effectiveness of POD algorithms in identifying the key objects via uncertainty. The dataset is continuously expanding; the first batch of open-source data includes 1126 frames with an average of 2.27 key objects and 2.47 normal objects per frame. To demonstrate how to use this dataset for SOTIF research, this paper further quantifies the perception SOTIF entropy to confirm whether a scenario is unknown and unsafe for a perception system. The experimental results show that the quantified entropy can effectively and efficiently reflect the failure of the perception algorithm.
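The paper's SOTIF entropy is not defined in the abstract; the sketch below only illustrates the generic idea of quantifying scenario-level uncertainty from detection class distributions via Shannon entropy. The aggregation (mean entropy over key objects) and all names are illustrative assumptions.

```python
import numpy as np

def detection_entropy(class_probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a single detection's class distribution."""
    p = np.clip(class_probs, 1e-12, 1.0)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def frame_sotif_entropy(key_object_probs: list[np.ndarray]) -> float:
    """Aggregate uncertainty over the key objects of one frame (mean entropy)."""
    if not key_object_probs:
        return 0.0
    return float(np.mean([detection_entropy(p) for p in key_object_probs]))
```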
Current methods for LiDAR semantic segmentation are not robust enough for real-world applications such as autonomous driving, since they are closed-set and static. The closed-set assumption means the network can only output labels of trained classes, even for objects it has never seen, while a static network cannot update its knowledge base according to what it has observed. Therefore, in this work, we propose the open-world semantic segmentation task for LiDAR point clouds, which aims to 1) identify both old and novel classes via open-set semantic segmentation, and 2) gradually incorporate novel objects into the existing knowledge base via incremental learning without forgetting old classes. To this end, we propose a redundancy classifier (REAL) framework that provides a general architecture for both the open-set semantic segmentation and incremental learning problems. Experimental results show that REAL simultaneously achieves state-of-the-art performance on the open-set semantic segmentation task on the SemanticKITTI and nuScenes datasets, and alleviates the catastrophic forgetting problem during incremental learning by a large margin.
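REAL's redundancy-classifier mechanism is not detailed here; as a stand-in, the sketch below shows a common open-set baseline that flags points as novel when the maximum softmax probability over the known classes is low. This is a generic max-softmax threshold, not the REAL method itself, and the threshold value is an assumption.

```python
import torch

def flag_novel_points(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Label a point as 'novel' when the max softmax probability over the known
    classes falls below `threshold`; otherwise keep the argmax known class.
    Returns -1 for novel points, else the predicted class index."""
    probs = torch.softmax(logits, dim=-1)   # (N, num_known_classes)
    conf, pred = probs.max(dim=-1)
    pred = pred.clone()
    pred[conf < threshold] = -1
    return pred
```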
This paper describes the PASH participation in the TREC 2021 Deep Learning Track. In the recall stage, we adopt a scheme combining sparse and dense retrieval methods. In the multi-stage ranking phase, point-wise and pair-wise ranking strategies are applied one after another, based on a model continually pre-trained on general knowledge and document-level data. Compared to the TREC 2020 Deep Learning Track, we additionally introduce the generative model T5 to further enhance performance.
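The abstract does not specify how the sparse and dense retrieval scores are combined; a common choice is linear interpolation of min-max normalized scores, sketched below. The function name, normalization, and `alpha` weight are illustrative assumptions, not the PASH system's actual scheme.

```python
def fuse_retrieval_scores(sparse_scores: dict[str, float],
                          dense_scores: dict[str, float],
                          alpha: float = 0.5) -> list[tuple[str, float]]:
    """Linearly interpolate min-max normalized sparse and dense scores per doc id."""
    def normalize(scores: dict[str, float]) -> dict[str, float]:
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    s, d = normalize(sparse_scores), normalize(dense_scores)
    docs = set(s) | set(d)
    fused = {doc: alpha * s.get(doc, 0.0) + (1 - alpha) * d.get(doc, 0.0) for doc in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```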
Pixel synthesis is a promising research paradigm for image generation, as it can exploit pixel-wise prior knowledge for generation. However, existing methods still suffer from excessive memory footprint and computation overhead. In this paper, we propose a progressive pixel synthesis network, called PixelFolder, for efficient image generation. Specifically, PixelFolder formulates image generation as a progressive pixel regression problem and synthesizes images via a multi-stage structure, which greatly reduces the overhead caused by large tensor transformations. In addition, we introduce novel pixel folding operations to further improve model efficiency while preserving pixel-wise prior knowledge for end-to-end regression. With these innovative designs, we greatly reduce the cost of pixel synthesis, e.g., reducing computation by 89% and parameters by 53% compared to the latest pixel synthesis method, CIPS. To validate our method, we conduct extensive experiments on two benchmark datasets, namely FFHQ and LSUN Church. The experimental results show that, with much lower cost, PixelFolder obtains new state-of-the-art (SOTA) performance on both benchmark datasets, i.e., 3.77 FID on FFHQ and 2.45 FID on LSUN Church. It is also more efficient than SOTA methods such as StyleGAN2, reducing computation by about 72% and parameters by about 31%. These results strongly validate the effectiveness of the proposed PixelFolder.
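The exact pixel folding operation of PixelFolder is not described in the abstract; the sketch below illustrates a space-to-depth style folding and its inverse using PyTorch's `pixel_unshuffle`/`pixel_shuffle`, which trade spatial resolution for channels while preserving pixel-wise information. This is an assumed analogue for illustration, not the paper's operator.

```python
import torch
import torch.nn.functional as F

def fold_pixels(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Space-to-depth 'folding': (B, C, H, W) -> (B, C*factor^2, H/factor, W/factor)."""
    return F.pixel_unshuffle(x, downscale_factor=factor)

def unfold_pixels(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Depth-to-space 'unfolding', the inverse of fold_pixels."""
    return F.pixel_shuffle(x, upscale_factor=factor)

x = torch.randn(1, 16, 64, 64)
assert torch.allclose(unfold_pixels(fold_pixels(x)), x)  # folding is lossless
```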
The discovery of anti-cancer drugs has often been serendipitous. We therefore introduce an open molecular graph learning benchmark, called CandidateDrug4Cancer, a challenging and realistic benchmark dataset that facilitates scalable, robust, and reproducible graph machine learning research for anti-cancer drug discovery. The CandidateDrug4Cancer dataset covers multiple of the most-targeted cancer targets and contains 54,869 cancer-related drug molecules, ranging from pre-clinical and clinical to FDA-approved. In addition to constructing the dataset, we conduct benchmark experiments on effective drug-target interaction (DTI) prediction using descriptors and expressive graph neural networks. The experimental results show that CandidateDrug4Cancer poses significant challenges for learning molecular graphs and targets in practical applications, indicating opportunities for future research on developing candidate drugs for cancer treatment.
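As an illustration of the kind of DTI prediction baseline such a benchmark targets (not the paper's models), the sketch below scores a drug-target pair from a precomputed molecular graph embedding and a target descriptor vector with a small MLP; all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class SimpleDTIModel(nn.Module):
    """Score a (drug, target) pair from a precomputed molecular graph embedding
    and a target descriptor vector; dimensions are illustrative."""
    def __init__(self, mol_dim: int = 128, target_dim: int = 1024, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(mol_dim + target_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, mol_emb: torch.Tensor, target_desc: torch.Tensor) -> torch.Tensor:
        # Concatenate the two representations and predict an interaction probability.
        return torch.sigmoid(self.mlp(torch.cat([mol_emb, target_desc], dim=-1)))
```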
Graph neural networks (GNNs) have extended the success of deep neural networks (DNNs) to non-Euclidean graph data, achieving ground-breaking performance on various tasks such as node classification and graph property prediction. Nonetheless, existing systems are inefficient at training large graphs with billions of nodes and edges on GPUs. The main bottleneck is the process of preparing data for GPUs: subgraph sampling and feature retrieval. This paper proposes BGL, a distributed GNN training system designed to address these bottlenecks with a few key ideas. First, we propose a dynamic cache engine to minimize feature retrieval traffic. By co-designing the caching policy and the sampling order, we find a sweet spot of low overhead and a high cache hit ratio. Second, we improve the graph partitioning algorithm to reduce cross-partition communication during subgraph sampling. Finally, careful resource isolation reduces contention between different data preprocessing stages. Extensive experiments on various GNN models and large graph datasets show that BGL significantly outperforms existing GNN training systems by 20.68x on average.
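BGL's dynamic cache co-designs the caching policy with the sampling order; the sketch below only illustrates the interface of a node-feature cache with a plain LRU policy, where a miss pays the remote feature-retrieval cost. The class and the `fetch_fn` hook are hypothetical.

```python
from collections import OrderedDict
import torch

class NodeFeatureCache:
    """A small LRU cache for node feature vectors, a stand-in for the kind of
    caching that cuts feature-retrieval traffic during subgraph sampling."""
    def __init__(self, capacity: int, fetch_fn):
        self.capacity = capacity
        self.fetch_fn = fetch_fn          # e.g., reads features from remote storage
        self.cache: OrderedDict[int, torch.Tensor] = OrderedDict()

    def get(self, node_id: int) -> torch.Tensor:
        if node_id in self.cache:
            self.cache.move_to_end(node_id)      # mark as recently used
            return self.cache[node_id]
        feat = self.fetch_fn(node_id)            # cache miss: pay the retrieval cost
        self.cache[node_id] = feat
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict the least recently used entry
        return feat
```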
Building damage detection after natural disasters such as earthquakes is crucial for initiating effective emergency response actions. Remotely sensed very high spatial resolution (VHR) imagery can provide vital information thanks to its ability to map affected buildings with high geometric precision. Many methods have been developed to detect buildings damaged by earthquakes; however, little attention has been paid to exploiting the rich features represented in VHR images with deep neural networks (DNNs). This paper presents a novel superpixel-based approach combining a DNN and a modified segmentation method to detect damaged buildings from VHR imagery. First, a modified fast scanning and adaptive merging method is extended to create an initial over-segmentation. Second, segments are merged based on a region adjacency graph (RAG), using an improved semantic similarity criterion composed of local binary pattern (LBP) texture, spectral, and shape features. Third, a pre-trained DNN using stacked denoising auto-encoders, called SDAE-DNN, is presented to exploit rich semantic features for building damage detection. The deep feature abstraction of SDAE-DNN improves detection accuracy by learning more intrinsic and discriminative features, outperforming other methods based on state-of-the-art alternative classifiers. We demonstrate the feasibility and effectiveness of our method using WorldView-2 imagery of a complex urban area of Bhaktapur, Nepal, which was affected by the Nepal earthquake of April 25, 2015.
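As a minimal sketch of the stacked denoising auto-encoder idea behind SDAE-DNN (not the paper's exact architecture), the layer below corrupts its input with Gaussian noise, encodes it, and reconstructs the clean input; layers trained this way can be stacked and fine-tuned with a classification head. Dimensions and the noise level are assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One layer of a stacked denoising auto-encoder: corrupt the input with
    Gaussian noise, encode it, then reconstruct the clean input."""
    def __init__(self, in_dim: int, hidden_dim: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        corrupted = x + self.noise_std * torch.randn_like(x)  # inject noise
        code = self.encoder(corrupted)                        # hidden representation
        recon = self.decoder(code)                            # reconstruct clean input
        return code, recon

# Layer-wise pretraining sketch: train each DAE to reconstruct its own input,
# then stack the encoders and fine-tune with a classifier head for damage detection.
```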